Otolaryngology
'Coffee is just the excuse': the deaf-run cafe where hearing people sign to order
The video menu at Dialogue Cafe teaches hearing people how to order a drink using sign language. Wesley Hartwell raised his fists to the barista and shook them next to his ears. He then lowered his fists, extended his thumbs and little fingers, and moved them up and down by his chest, as though milking a cow. Finally, he laid the fingers of one hand flat on his chin and flexed his wrist forward.
- North America > United States (0.14)
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > Scotland (0.05)
- (5 more...)
- Education (0.96)
- Leisure & Entertainment > Sports (0.70)
- Government > Regional Government (0.48)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.35)
Classifying Phonotrauma Severity from Vocal Fold Images with Soft Ordinal Regression
Matton, Katie, Balaji, Purvaja, Ghasemzadeh, Hamzeh, Cooper, Jameson C., Mehta, Daryush D., Van Stan, Jarrad H., Hillman, Robert E., Picard, Rosalind, Guttag, John, Abulnaga, S. Mazdak
Phonotrauma refers to vocal fold tissue damage resulting from exposure to forces during voicing. It occurs on a continuum from mild to severe, and treatment options can vary based on severity. Assessment of severity involves a clinician's expert judgment, which is costly and can vary widely in reliability. In this work, we present the first method for automatically classifying phonotrauma severity from vocal fold images. To account for the ordinal nature of the labels, we adopt a widely used ordinal regression framework. To account for label uncertainty, we propose a novel modification to ordinal regression loss functions that enables them to operate on soft labels reflecting annotator rating distributions. Our proposed soft ordinal regression method achieves predictive performance approaching that of clinical experts, while producing well-calibrated uncertainty estimates. By providing an automated tool for phonotrauma severity assessment, our work can enable large-scale studies of phonotrauma, ultimately leading to improved clinical understanding and patient care.
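The abstract above describes modifying ordinal regression losses to accept soft labels that reflect annotator rating distributions. The paper's exact loss is not reproduced here; as a minimal sketch, assuming a cumulative-link ("severity > k") formulation with binary cross-entropy against soft cumulative targets, the function name and toy numbers below are illustrative, not from the paper:

```python
import numpy as np

def soft_ordinal_loss(logits, soft_labels):
    """Cumulative-link ordinal loss against a soft label distribution.

    logits: (K-1,) model scores for the K-1 ordered thresholds
            ("severity > k" for k = 0..K-2).
    soft_labels: (K,) annotator rating distribution over K severity levels.
    Returns the mean binary cross-entropy between predicted threshold
    probabilities and the soft cumulative targets.
    """
    # Soft cumulative target: probability mass strictly above each threshold.
    targets = 1.0 - np.cumsum(soft_labels)[:-1]          # shape (K-1,)
    probs = 1.0 / (1.0 + np.exp(-logits))                # sigmoid
    eps = 1e-12
    bce = -(targets * np.log(probs + eps)
            + (1.0 - targets) * np.log(1.0 - probs + eps))
    return bce.mean()

# Toy example: 4 severity levels, annotators split between levels 1 and 2.
soft = np.array([0.0, 0.5, 0.5, 0.0])
loss_good = soft_ordinal_loss(np.array([4.0, 0.0, -4.0]), soft)  # order-consistent
loss_bad = soft_ordinal_loss(np.array([-4.0, 0.0, 4.0]), soft)   # order-reversed
print(loss_good < loss_bad)  # -> True
```

With a one-hot `soft_labels` vector this reduces to the usual hard-label cumulative-link loss, which is the property that lets a single framework cover both rating regimes.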
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- Asia > Vietnam > Hà Nam Province (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
New Hearing Aid Company, Fortell, Brings in Steve Martin and Others as Fans
Well, Who Do You Know? AI-powered startup Fortell has become a secret handshake for the privileged hearing-impaired crowd who swear by the product. Now, it wants to be in your ears. A secret is percolating at dinner parties, salons, and cocktail gatherings among the august New York City elite. It's whispered in the circles of financial masters of the universe, Hollywood stars, and owners of sports teams. Many haven't heard it--or if they did hear, they might not have made out the words through noisy cross-conversations. Once they do know--particularly if they're boomers--they want it desperately. Fortell is a hearing aid, one that claims to use AI to provide a dramatically superior aural experience. The chosen few included in its beta test claim that it seems to top the performance of high-end devices they'd been unhappily using. These testers have made pilgrimages to Fortell's headquarters on the fifth floor of a WeWork facility in New York City's trendy SoHo neighborhood, where they were fitted for the hearing aids--which from the outside look pretty much like standard, over-the-ear, teardrop-shaped devices. But the big moment comes when a Fortell staffer takes them down to street level.
- North America > United States > New York (0.45)
- Asia > Russia (0.14)
- North America > United States > California (0.04)
- (3 more...)
Autonomous labeling of surgical resection margins using a foundation model
Yang, Xilin, Aydin, Musa, Lu, Yuhong, Selcuk, Sahan Yoruc, Bai, Bijie, Zhang, Yijie, Birkeland, Andrew, Ehrlich, Katjana, Bec, Julien, Marcu, Laura, Pillar, Nir, Ozcan, Aydogan
Assessing resection margins is central to pathological specimen evaluation and has profound implications for patient outcomes. Current practice employs physical inking, which is applied variably, and cautery artifacts can obscure the true margin on histological sections. We present a virtual inking network (VIN) that autonomously localizes the surgical cut surface on whole-slide images, reducing reliance on inks and standardizing margin-focused review. VIN uses a frozen foundation model as the feature extractor and a compact two-layer multilayer perceptron trained for patch-level classification of cautery-consistent features. The dataset comprised 120 hematoxylin and eosin (H&E) stained slides from 12 human tonsil tissue blocks, resulting in ~2 TB of uncompressed raw image data, where a board-certified pathologist provided boundary annotations. In blind testing with 20 slides from previously unseen blocks, VIN produced coherent margin overlays that qualitatively aligned with expert annotations across serial sections. Quantitatively, region-level accuracy was ~73.3% across the test set, with errors largely confined to limited areas that did not disrupt continuity of the whole-slide margin map. These results indicate that VIN captures cautery-related histomorphology and can provide a reproducible, ink-free margin delineation suitable for integration into routine digital pathology workflows and for downstream measurement of margin distances.
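The abstract above describes VIN's architecture: a frozen foundation model supplies patch embeddings, and a compact two-layer MLP classifies each patch for cautery-consistent features. The specific foundation model and dimensions are not stated, so the sketch below stands in random vectors for the frozen embeddings, and the 768-dimensional size and hidden width are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(features, W1, b1, W2, b2):
    """Compact two-layer MLP head over frozen patch embeddings.

    features: (n_patches, d) embeddings from a frozen feature extractor.
    Returns (n_patches, 2) class probabilities
    (cautery-consistent vs. background).
    """
    h = np.maximum(features @ W1 + b1, 0.0)      # hidden layer, ReLU
    logits = h @ W2 + b2
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)      # softmax over the 2 classes

d, hidden = 768, 64                              # assumed embedding/hidden sizes
W1 = rng.normal(0.0, 0.02, (d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.02, (hidden, 2)); b2 = np.zeros(2)

patches = rng.normal(size=(16, d))               # stand-in for frozen embeddings
probs = mlp_head(patches, W1, b1, W2, b2)
margin_patches = probs[:, 1] > 0.5               # patches flagged as cut surface
```

Stitching the per-patch decisions back onto slide coordinates is what would yield the whole-slide margin overlay the abstract describes; only the head is trained, which keeps the model compact relative to the frozen backbone.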
- North America > United States > California > Los Angeles County > Los Angeles (0.30)
- North America > United States > California > Yolo County > Davis (0.28)
- North America > Mexico > Gulf of Mexico (0.14)
- (5 more...)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.68)
- South America > Venezuela (0.04)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.79)
- (2 more...)
The Best WIRED-Tested Extreme Alarm Clock of 2025: Not for the Faint of Heart
From runaway robots to "sonic bombs," we reviewed offbeat alarm clocks designed to awaken even the heaviest sleepers. Not every alarm clock is created equal. Heavy sleepers know how easy it is to snooze through the overly genteel alarms on your phone. For people who can't get out of bed without a bigger jolt, extreme alarms have popped up in recent years--from relatively simple puzzle-alarm phone apps to alarms on wheels to alarms that shake the bed. Not only are these an innovative way to get chronic snoozers out of bed, but they can be great for those who are hard of hearing, utilizing different frequencies and pitches as well as movement through vibration.
- North America > United States > Missouri > Jackson County > Kansas City (0.04)
- North America > United States > California (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Retail (0.95)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.36)
Forecasting Spoken Language Development in Children with Cochlear Implants Using Preimplantation MRI
Wang, Yanlin, Yuan, Di, Dettman, Shani, Choo, Dawn, Xu, Emily Shimeng, Thomas, Denise, Ryan, Maura E, Wong, Patrick C M, Young, Nancy M
Cochlear implants (CI) significantly improve spoken language in children with severe-to-profound sensorineural hearing loss (SNHL), yet outcomes remain more variable than in children with normal hearing. This variability cannot be reliably predicted for individual children using age at implantation or residual hearing. This study aims to compare the accuracy of traditional machine learning (ML) to deep transfer learning (DTL) algorithms in predicting post-CI spoken language development of children with bilateral SNHL, using a binary classification model of high versus low language improvers. A total of 278 implanted children were enrolled from three centers. We compared the accuracy, sensitivity, and specificity of prediction models based upon brain neuroanatomic features using traditional ML and DTL. DTL prediction models using a bilinear attention-based fusion strategy achieved: accuracy of 92.39% (95% CI, 90.70%-94.07%), sensitivity of 91.22% (95% CI, 89.98%-92.47%), specificity of 93.56% (95% CI, 90.91%-96.21%), and area under the curve (AUC) of 0.977 (95% CI, 0.969-0.986). DTL outperformed traditional ML models on all outcome measures. DTL performance was significantly improved by direct capture of discriminative and task-specific information, an advantage of the representation learning enabled by this approach over traditional ML. The results support the feasibility of a single DTL prediction model for language prediction of children served by CI programs worldwide.
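The abstract mentions a "bilinear attention-based fusion strategy" for combining neuroanatomic feature representations, without specifying the architecture. As a minimal sketch of the generic bilinear-fusion building block such strategies are built on (the dimensions, weights, and the final sigmoid readout below are all illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def bilinear_fusion(a, b, W):
    """Fuse two modality embeddings with a learned bilinear form.

    a: (d1,) features from one MRI-derived representation
    b: (d2,) features from another representation
    W: (k, d1, d2) bilinear weights; output has k fused features,
       z_k = a^T W_k b. A minimal stand-in for bilinear attention fusion.
    """
    return np.einsum('i,kij,j->k', a, W, b)

d1, d2, k = 32, 32, 8
W = rng.normal(0.0, 0.1, (k, d1, d2))
a = rng.normal(size=d1)
b = rng.normal(size=d2)
fused = bilinear_fusion(a, b, W)          # (8,) features fed to the classifier
logit = fused @ rng.normal(size=k)        # binary high/low-improver score
prob_high_improver = 1.0 / (1.0 + np.exp(-logit))
```

The bilinear term lets every pairwise interaction between the two feature sets contribute to the fused representation, which is the usual motivation over simple concatenation.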
- North America > United States > Illinois > Cook County > Chicago (0.06)
- Asia > China > Hong Kong (0.06)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (3 more...)
- Research Report > Strength Medium (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Speech Separation for Hearing-Impaired Children in the Classroom
Olalere, Feyisayo, van der Heijden, Kiki, Stronks, H. Christiaan, Briaire, Jeroen, Frijns, Johan H. M., Güçlütürk, Yagmur
[Figure 1: The process includes simulating room and listener acoustic properties (A), modeling talkers' movement trajectories (B), and synthesizing classroom speech mixtures (C). The numbers (1)-(5) correspond to the steps itemized in Section II-B.]
[...] more challenging and reflective of classroom acoustics. The separation model is trained to output time-domain waveforms for each speaker with no interference from the other speaker or background noise. This setup enables the model not only to separate overlapping speech, but also to preserve the spatial distinctions associated with each moving source.
B. Simulation of Overlapping Speech for Classroom Conditions
To capture the reverberant and spatial characteristics typical of classroom environments, we developed a spatialization pipeline for generating training and evaluation data (see Fig. 1). This pipeline consists of five main components, which are explained below in detail:
1) Simulation of room impulse responses (RIRs)
2) Application of head-related impulse responses (HRIRs)
3) Generation of binaural room impulse responses (BRIRs)
4) Modeling of talkers' movement trajectories
5) Synthesis of the classroom speech data
1) Room Impulse Responses: To simulate naturalistic reverberant classroom acoustics, we generated RIRs that capture direct sound, early reflections, and reverberation. These RIRs were used to spatialize source signals in simulated classroom environments with varying geometry, reverberation, and source-listener distances. We used the Pyroomacoustics Python package [35], which implements the image source method to model sound propagation in rectangular (shoebox) rooms. A total of 30 classrooms were simulated, with dimensions randomly sampled from a range of 8.5 × 8.5 × 3 m to 10 × 10 × 3.5 m (length × width × height), reflecting typical U.S. classroom sizes [36], [37].
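The paper generates its RIRs with Pyroomacoustics' image source method. To keep the sketch dependency-free, the snippet below only reproduces the first part of that step: sampling 30 shoebox classrooms in the stated dimension range, then estimating each room's reverberation time with Sabine's formula as a sanity check. The Sabine estimate and the absorption coefficient are illustrative assumptions, not part of the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_classroom():
    """Sample one shoebox classroom within the paper's stated range."""
    length = rng.uniform(8.5, 10.0)
    width = rng.uniform(8.5, 10.0)
    height = rng.uniform(3.0, 3.5)
    return length, width, height

def sabine_rt60(length, width, height, absorption=0.3):
    """Sabine estimate of reverberation time RT60 (seconds).

    RT60 = 0.161 * V / (alpha * S), with V the room volume and S the total
    surface area. The absorption coefficient alpha = 0.3 is an assumed
    value, not one taken from the paper.
    """
    volume = length * width * height
    surface = 2.0 * (length * width + length * height + width * height)
    return 0.161 * volume / (absorption * surface)

rooms = [sample_classroom() for _ in range(30)]   # 30 rooms, as in the paper
rt60s = [sabine_rt60(*room) for room in rooms]
```

In the actual pipeline each sampled geometry would be passed to an image-source simulator (e.g. a Pyroomacoustics `ShoeBox` room) to produce full RIRs with direct sound and early reflections, rather than a single RT60 number.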
- Europe > Netherlands > South Holland > Leiden (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
- Education > Educational Setting (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.47)
What Causes Postoperative Aspiration?
Nagesh, Supriya, Covarrubias, Karina, El-Kareh, Robert, Kasiviswanathan, Shiva Prasad, Mishra, Nina
Background: Aspiration, the inhalation of foreign material into the lungs, significantly impacts surgical patient morbidity and mortality. This study develops a machine learning (ML) model to predict postoperative aspiration, enabling timely preventative interventions. Methods: From the MIMIC-IV database of over 400,000 hospital admissions, we identified 826 surgical patients (mean age: 62, 55.7% male) who experienced aspiration within seven days post-surgery, along with a matched non-aspiration cohort. Three ML models (XGBoost, Multilayer Perceptron, and Random Forest) were trained using pre-surgical hospitalization data to predict postoperative aspiration. To investigate causation, we estimated Average Treatment Effects (ATE) using Augmented Inverse Probability Weighting. Results: Our ML model achieved an AUROC of 0.86 and 77.3% sensitivity on a held-out test set. Maximum daily opioid dose, length of stay, and patient age emerged as the most important predictors. ATE analysis identified significant causative factors: opioids (0.25 +/- 0.06) and operative site (neck: 0.20 +/- 0.13, head: 0.19 +/- 0.13). Despite equal surgery rates across genders, men were 1.5 times more likely to aspirate and received 27% higher maximum daily opioid dosages than women. Conclusion: ML models can effectively predict postoperative aspiration risk, enabling targeted preventative measures. Maximum daily opioid dosage and operative site significantly influence aspiration risk. The gender disparity in both opioid administration and aspiration rates warrants further investigation. These findings have important implications for improving postoperative care protocols and aspiration prevention strategies.
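The abstract estimates ATEs with Augmented Inverse Probability Weighting (AIPW). As a minimal sketch of that estimator on fully synthetic data (the covariate, the randomized binary exposure, the constant propensity estimate, and the 0.25 true effect are all assumptions for illustration, not the study's models or data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic cohort: x = a pre-surgical covariate (standardized), t = a binary
# exposure (e.g. high opioid dose), y = an aspiration risk score. The true
# average treatment effect is set to 0.25 for illustration.
n = 20000
x = rng.normal(size=n)
t = rng.binomial(1, 0.5, size=n)            # randomized in this toy example
y = 0.1 * x + 0.25 * t + rng.normal(0.0, 0.5, size=n)

def ols_predict(x_fit, y_fit, x_new):
    """Fit y = a + b*x by least squares and predict at x_new."""
    A = np.column_stack([np.ones_like(x_fit), x_fit])
    coef, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
    return coef[0] + coef[1] * x_new

# Outcome models fit separately on treated and control patients.
mu1 = ols_predict(x[t == 1], y[t == 1], x)
mu0 = ols_predict(x[t == 0], y[t == 0], x)
e = t.mean()            # constant propensity; valid here because t is randomized

# AIPW: outcome-model difference plus inverse-probability-weighted residuals.
ate = np.mean(mu1 - mu0
              + t * (y - mu1) / e
              - (1 - t) * (y - mu0) / (1 - e))
print(round(ate, 2))
```

AIPW is doubly robust: the estimate stays consistent if either the outcome models or the propensity model is correct, which is why it is a common choice for observational cohorts like this one (where, unlike the toy above, the propensity would itself be modeled from covariates).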
- North America > United States > New York (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > Middle East > Israel (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Quanvolutional Neural Networks for Pneumonia Detection: An Efficient Quantum-Assisted Feature Extraction Paradigm
Tanbhir, Gazi, Shahriyar, Md. Farhan, Chy, Abdullah Md Raihan
Pneumonia poses a significant global health challenge, demanding accurate and timely diagnosis. While deep learning, particularly Convolutional Neural Networks (CNNs), has shown promise in medical image analysis for pneumonia detection, CNNs often suffer from high computational costs, limitations in feature representation, and challenges in generalizing from smaller datasets. To address these limitations, we explore the application of Quanvolutional Neural Networks (QNNs), leveraging quantum computing for enhanced feature extraction. This paper introduces a novel hybrid quantum-classical model for pneumonia detection using the PneumoniaMNIST dataset. Our approach utilizes a quanvolutional layer with a parameterized quantum circuit (PQC) to process 2x2 image patches, employing rotational Y-gates for data encoding and entangling layers to generate non-classical feature representations. These quantum-extracted features are then fed into a classical neural network for classification. Experimental results demonstrate that the proposed QNN achieves a higher validation accuracy of 83.33 percent compared to a comparable classical CNN which achieves 73.33 percent. This enhanced convergence and sample efficiency highlight the potential of QNNs for medical image analysis, particularly in scenarios with limited labeled data. This research lays the foundation for integrating quantum computing into deep-learning-driven medical diagnostic systems, offering a computationally efficient alternative to traditional approaches.
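The abstract describes the quanvolutional primitive: a 2x2 patch encoded with rotational Y-gates, an entangling layer, and quantum-derived features passed to a classical network. The sketch below simulates one such patch transformation in plain numpy rather than a quantum SDK; the RY(pi * pixel) encoding scale, the single CNOT chain as the entangling layer, and Pauli-Z expectations as outputs are assumptions standing in for the paper's exact PQC:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cnot(state, control, target, n=4):
    """Apply CNOT to an n-qubit statevector (qubit 0 = most significant bit)."""
    out = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            out[i] = state[i ^ (1 << (n - 1 - target))]
    return out

def quanv_features(patch):
    """Quantum-assisted features for one 2x2 patch (pixel values in [0, 1]).

    Encodes each pixel with RY(pi * pixel), entangles with a CNOT chain,
    and returns the four Pauli-Z expectation values.
    """
    n = 4
    # Product state from per-qubit RY encodings of the flattened patch.
    state = np.array([1.0])
    for pixel in np.asarray(patch).ravel():
        state = np.kron(state, ry(np.pi * pixel) @ np.array([1.0, 0.0]))
    # Entangling layer: CNOT chain 0->1, 1->2, 2->3.
    for q in range(n - 1):
        state = cnot(state, q, q + 1, n)
    # <Z> on each qubit from the measurement probabilities.
    probs = np.abs(state) ** 2
    feats = []
    for q in range(n):
        signs = np.array([1.0 if not (i >> (n - 1 - q)) & 1 else -1.0
                          for i in range(2 ** n)])
        feats.append(float(probs @ signs))
    return feats

print(quanv_features([[0.0, 0.0], [0.0, 0.0]]))  # -> [1.0, 1.0, 1.0, 1.0]
```

Sliding this transform over an image like a stride-2 convolution yields four feature maps, which is the non-classical feature representation the classical classifier then consumes.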
- Health & Medicine > Therapeutic Area > Pulmonary/Respiratory Diseases (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)